Consequence Relation


Rational Inference in Formal Concept Analysis

Carr, Lucas, Leisegang, Nicholas, Meyer, Thomas, Obiedkov, Sergei

arXiv.org Artificial Intelligence

Defeasible conditionals are a form of non-monotonic inference that enables the expression of statements like "if $\phi$ then normally $\psi$". The KLM framework defines a semantics for the propositional case of defeasible conditionals by constructing a preference ordering over possible worlds. The pattern of reasoning induced by these semantics is characterised by consequence relations satisfying certain desirable properties of non-monotonic reasoning. In FCA, implications are used to describe dependencies between attributes. However, these implications are unsuitable for reasoning about erroneous data or data prone to exceptions. Until recently, the topic of non-monotonic inference in FCA has remained largely uninvestigated. In this paper, we provide a construction of the KLM framework for defeasible reasoning in FCA and show that this construction remains faithful to the principles of non-monotonic inference described in the original framework. We further argue that, while remaining consistent with the original ideas around non-monotonic reasoning, the defeasible reasoning we propose in FCA offers a more contextual view on inference, allowing more relevant conclusions to be drawn than in the propositional case.
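As an illustrative sketch only (not the paper's FCA construction), the KLM-style preferential semantics described above can be modelled by ranking possible worlds by normality and checking that the consequent holds in the most-preferred worlds satisfying the antecedent. All names and the example ranking below are hypothetical:

```python
# Minimal sketch of KLM-style preferential semantics: worlds are sets of true
# atoms, lower rank means "more normal", and phi |~ psi holds iff psi is true
# in every minimal (most normal) world satisfying phi.

def minimal_worlds(worlds, ranks, phi):
    """Return the lowest-ranked worlds satisfying phi."""
    sat = [w for w in worlds if phi(w)]
    if not sat:
        return []
    best = min(ranks[w] for w in sat)
    return [w for w in sat if ranks[w] == best]

def defeasibly_entails(worlds, ranks, phi, psi):
    """phi |~ psi: psi holds in all minimal phi-worlds."""
    return all(psi(w) for w in minimal_worlds(worlds, ranks, phi))

# Worlds encoded as frozensets of true atoms; penguins are exceptional birds.
worlds = [
    frozenset({"bird", "flies"}),    # a normal bird
    frozenset({"bird", "penguin"}),  # a penguin: a bird that does not fly
    frozenset(),
]
ranks = {worlds[0]: 0, worlds[1]: 1, worlds[2]: 0}

bird = lambda w: "bird" in w
penguin = lambda w: "penguin" in w
flies = lambda w: "flies" in w

print(defeasibly_entails(worlds, ranks, bird, flies))     # birds normally fly
print(defeasibly_entails(worlds, ranks, penguin, flies))  # but penguins do not
```

The example exhibits the defeasibility at the heart of the framework: `bird |~ flies` holds while the strengthened antecedent `penguin` defeats it.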


Inference of Abstraction for Grounded Predicate Logic

Kido, Hiroyuki

arXiv.org Artificial Intelligence

An important open question in AI is what simple and natural principle enables a machine to reason logically for meaningful abstraction with grounded symbols. This paper explores a conceptually new approach to combining probabilistic reasoning and predicative symbolic reasoning over data. We return to the era of reasoning with a full joint distribution, before the advent of Bayesian networks. We then argue that a full joint distribution over models, which is of exponential size in propositional logic and of infinite size in predicate logic, should be derived simply from a full joint distribution over data of linear size. We show that the same process is enough not only to generalise the logical consequence relation of predicate logic but also to provide a new perspective from which to rethink well-known limitations such as the undecidability of predicate logic, the symbol grounding problem and the principle of explosion. The reproducibility of this theoretical work is fully demonstrated by the included proofs.


Non-monotonic Extensions to Formal Concept Analysis via Object Preferences

Carr, Lucas, Leisegang, Nicholas, Meyer, Thomas, Rudolph, Sebastian

arXiv.org Artificial Intelligence

Formal Concept Analysis (FCA) is an approach to creating a conceptual hierarchy in which a \textit{concept lattice} is generated from a \textit{formal context}: a triple consisting of a set of objects, $G$, a set of attributes, $M$, and an incidence relation $I$ on $G \times M$. A \textit{concept} is then modelled as a pair consisting of a set of objects (the \textit{extent}) and a set of shared attributes (the \textit{intent}). Implications in FCA describe how one set of attributes follows from another. The semantics of these implications closely resembles that of logical consequence in classical logic; in that sense, an implication describes a monotonic conditional. The contributions of this paper are two-fold. First, we introduce a non-monotonic conditional between sets of attributes, which assumes a preference over the set of objects. We show that this conditional gives rise to a consequence relation consistent with the postulates for non-monotonicity proposed by Kraus, Lehmann, and Magidor (commonly referred to as the KLM postulates). We argue that our contribution establishes a strong characterisation of non-monotonicity in FCA. Second, we introduce typical concepts: concepts in which the intent aligns with expectations from the extent, allowing for an exception-tolerant view of concepts. To this end, we show that the set of all typical concepts is a meet semi-lattice of the original concept lattice. This notion of typical concepts is a further introduction of KLM-style typicality into FCA, and is foundational towards developing an algebraic structure representing a concept lattice of prototypical concepts.
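To make the FCA machinery concrete, here is a small sketch of the standard derivation operators and the (monotonic) attribute-implication check the abstract contrasts against. The derivation operators are textbook FCA; the example context and all names are hypothetical:

```python
# A formal context as a mapping from objects (G) to their attributes (M);
# the incidence relation I is encoded by attribute membership.
context = {
    "sparrow": {"bird", "flies"},
    "penguin": {"bird", "swims"},
    "duck":    {"bird", "flies", "swims"},
}
all_attrs = set().union(*context.values())

def extent(attrs):
    """Objects possessing every attribute in attrs (the ' operator on M)."""
    return {g for g, ms in context.items() if attrs <= ms}

def intent(objs):
    """Attributes shared by every object in objs (the ' operator on G)."""
    return set(all_attrs) if not objs else set.intersection(*(context[g] for g in objs))

def implication_holds(A, B):
    """A -> B holds in the context iff B is contained in the closure A''."""
    return set(B) <= intent(extent(set(A)))

print(implication_holds({"flies"}, {"bird"}))  # everything that flies here is a bird
print(implication_holds({"bird"}, {"flies"}))  # fails: the penguin is an exception
```

The failing implication is exactly the kind of exception-prone dependency the paper's non-monotonic conditional is designed to tolerate.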


Defeasible Reasoning on Concepts

Ding, Yiwen, Manoorkar, Krishna, Switrayni, Ni Wayan, Wang, Ruoding

arXiv.org Artificial Intelligence

In this paper, we take first steps toward developing defeasible reasoning on concepts in the KLM framework. We define generalizations of the cumulative reasoning system C and the cumulative reasoning system with loop, CL, to the conceptual setting. We also generalize cumulative models, cumulative ordered models, and preferential models to the conceptual setting, and show soundness and completeness results for these models.


Inference of Abstraction for a Unified Account of Reasoning and Learning

Kido, Hiroyuki

arXiv.org Artificial Intelligence

Inspired by Bayesian approaches to brain function in neuroscience, we give a simple theory of probabilistic inference for a unified account of reasoning and learning. We simply model how data cause symbolic knowledge in terms of its satisfiability in formal logic. The underlying idea is that reasoning is a process of deriving symbolic knowledge from data via abstraction, i.e., selective ignorance. The logical consequence relation is discussed for its proof-based theoretical correctness. The MNIST dataset is discussed for its experiment-based empirical correctness.


Inference of Abstraction for a Unified Account of Symbolic Reasoning from Data

Kido, Hiroyuki

arXiv.org Artificial Intelligence

For example, they have the implicit assumption that the method used to extract symbolic knowledge from data cannot be applied to the method used to perform reasoning over the symbolic knowledge, and vice versa. The theory relates a consequence relation, an empirical consequence relation, maximal consistent sets, maximal possible sets and maximum likelihood estimation, and gives new insights into reasoning towards human-like machine intelligence.


A Simple Generative Model of Logical Reasoning and Statistical Learning

Kido, Hiroyuki

arXiv.org Artificial Intelligence

Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence. Most existing work considers how to combine existing logical and statistical systems. However, there is so far no theory of inference explaining how basic approaches to statistical learning and logical reasoning stem from a common principle. Inspired by the fact that much empirical work in neuroscience suggests Bayesian (or probabilistic generative) approaches to brain function, including learning and reasoning, we propose a simple Bayesian model of logical reasoning and statistical learning. The theory is statistically correct as it satisfies Kolmogorov's axioms, is consistent with both Fenstad's representation theorem and maximum likelihood estimation, and performs exact Bayesian inference with linear-time complexity. The theory is logically correct as it is a data-driven generalisation of uncertain reasoning from consistency, possibility, inconsistency and impossibility. The theory is correct in terms of machine learning as its solution to generation and prediction tasks on the MNIST dataset is not only empirically reasonable but also theoretically correct against the K-nearest-neighbour method. We simply model how data cause symbolic knowledge in terms of its satisfiability in formal logic. Symbolic reasoning emerges from following this causality forwards and backwards: the forward and backward processes correspond to an interpretation and an inverse interpretation in formal logic, respectively. The inverse interpretation differentiates our work from the mainstream approaches often referred to as inverse entailment, inverse deduction or inverse resolution. The perspective gives new insights into learning and reasoning towards human-like machine intelligence.


Exploring the Landscape of Relational Syllogistic Logics

Kruckman, Alex, Moss, Lawrence S.

arXiv.org Artificial Intelligence

This paper explores relational syllogistic logics, a family of logical systems related to reasoning about relations in extensions of the classical syllogistic. These are all decidable logical systems. We prove completeness theorems and complexity results for a natural subfamily of relational syllogistic logics, parametrized by constructors for terms and for sentences.
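As a toy illustration of the semantic side of such systems (not the paper's proof systems or completeness arguments), basic syllogistic sentences and one relational extension can be evaluated over a finite model. The model and sentence forms below are hypothetical simplifications:

```python
# A finite model: two noun extensions and a binary relation for a verb "sees".
X = {"a", "b"}
Y = {"b", "c"}
sees = {("a", "b"), ("a", "c"), ("b", "b"), ("b", "c")}

def all_are(S, T):
    """'All S are T' holds iff S is a subset of T."""
    return S <= T

def some_are(S, T):
    """'Some S are T' holds iff S and T overlap."""
    return bool(S & T)

def all_see_all(S, T):
    """'All S see all T': a simple relational extension of the syllogistic."""
    return all((s, t) in sees for s in S for t in T)

print(all_are(X, Y))       # False: "a" is in X but not in Y
print(some_are(X, Y))      # True: "b" is in both
print(all_see_all(X, Y))   # True: every pair is in the relation
```

Relational syllogistic logics study exactly which families of such sentence constructors admit complete, decidable proof systems.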


Bayes Meets Entailment and Prediction: Commonsense Reasoning with Non-monotonicity, Paraconsistency and Predictive Accuracy

Kido, Hiroyuki, Okamoto, Keishi

arXiv.org Artificial Intelligence

The recent success of Bayesian methods in neuroscience and artificial intelligence gives rise to the hypothesis that the brain is a Bayesian machine. Since logic and learning are both practices of the human brain, this leads to another hypothesis: that there is a Bayesian interpretation underlying both logical reasoning and machine learning. In this paper, we introduce a generative model of logical consequence relations. It formalises how the truth value of a sentence is probabilistically generated from a probability distribution over states of the world. We show that the generative model characterises a classical consequence relation, a paraconsistent consequence relation and a non-monotonic consequence relation. In particular, the generative model gives a new consequence relation that outperforms them in reasoning with inconsistent knowledge. We also show that the generative model gives a new classification algorithm that outperforms several representative algorithms in predictive accuracy and complexity on the Kaggle Titanic dataset.
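The core generative idea, sentence probabilities induced by a distribution over states of the world, can be sketched in a few lines. Reading consequence as conditional probability 1 is our illustrative simplification, not the paper's exact definition, and the example distribution is hypothetical:

```python
# A probability distribution over states of the world, each state encoded as
# the set of true atoms.
worlds = {
    frozenset({"rain", "wet"}): 0.5,
    frozenset({"wet"}):         0.2,
    frozenset():                0.3,
}

def prob(sentence):
    """Probability that the sentence is generated as true."""
    return sum(p for w, p in worlds.items() if sentence(w))

def cond_prob(beta, alpha):
    """P(beta true | alpha true); a consequence alpha |= beta reads as value 1."""
    pa = prob(alpha)
    return prob(lambda w: alpha(w) and beta(w)) / pa if pa else 1.0

rain = lambda w: "rain" in w
wet = lambda w: "wet" in w

print(cond_prob(wet, rain))  # 1.0: "rain, therefore wet" holds with certainty
print(cond_prob(rain, wet))  # < 1.0: the converse only holds probabilistically
```

Varying how zero-probability and contradictory premises are handled is what separates the classical, paraconsistent and non-monotonic relations the abstract mentions.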


Bayesian Entailment Hypothesis: How Brains Implement Monotonic and Non-monotonic Reasoning

Kido, Hiroyuki

arXiv.org Artificial Intelligence

Recent success of Bayesian methods in neuroscience and artificial intelligence gives rise to the hypothesis that the brain is a Bayesian machine. Since logic, as the laws of thought, is a product and practice of the human brain, this leads to another hypothesis: that there is a Bayesian algorithm and data structure for logical reasoning. In this paper, we give a Bayesian account of entailment and characterize its abstract inferential properties. The Bayesian entailment is shown to be a monotonic consequence relation in an extreme case. In general, it is a sort of non-monotonic consequence relation without Cautious Monotony or Cut. Preferential entailment, a representative non-monotonic consequence relation, is shown to be a maximum a posteriori entailment, which is an approximation of the Bayesian entailment. We finally discuss the merits of our proposals in terms of encoding preferences on defaults, handling change and contradiction, and modeling human entailment.
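One way to see why such probabilistic entailments are non-monotonic is a threshold reading, where psi is entailed by phi when P(psi | phi) clears a threshold; strengthening the premise can then defeat a conclusion. This is a hedged sketch under that reading, not the paper's exact definition, and the distribution below is hypothetical:

```python
# Threshold-style probabilistic entailment over a distribution on worlds.
worlds = {
    frozenset({"bird", "flies"}):   0.80,
    frozenset({"bird", "penguin"}): 0.05,
    frozenset():                    0.15,
}

def prob(s):
    return sum(p for w, p in worlds.items() if s(w))

def entails(phi, psi, theta=0.9):
    """phi entails psi iff P(psi | phi) >= theta (and phi is possible)."""
    pa = prob(phi)
    return pa > 0 and prob(lambda w: phi(w) and psi(w)) / pa >= theta

bird = lambda w: "bird" in w
penguin = lambda w: "penguin" in w
flies = lambda w: "flies" in w

print(entails(bird, flies))     # True: P(flies | bird) is about 0.94
print(entails(penguin, flies))  # False: strengthening the premise defeats it
```

The failure of monotony here mirrors the abstract's observation that Bayesian entailment is, in general, a non-monotonic consequence relation.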